8 research outputs found

    Maximum Ignorance Polynomial Colour Correction

    In colour correction, we map the RGBs captured by a camera to human visual system referenced colour coordinates such as sRGB and CIE XYZ. Two of the simplest methods reported are linear and polynomial regression. However, obtaining optimal performance with regression – especially for a polynomial-based method – requires a large corpus of training data, which is time consuming to collect. If one has access to device spectral sensitivities, an alternative approach is to generate RGBs synthetically (we numerically generate camera RGBs from measured surface reflectances and light spectra). Advantageously, there is then no limit to the number of training samples we might use. In the limit – under the so-called maximum ignorance with positivity colour correction – all possible colour signals are assumed. In this work, we revisit the maximum ignorance idea in the context of polynomial regression. The formulation of the problem is much trickier, but we show – albeit with some tedious derivation – how to solve for the polynomial regression matrix in closed form. Empirically, however, this new polynomial maximum ignorance regression delivers significantly poorer colour correction performance than a physical-target-based method. This negative result teaches us that the maximum ignorance technique is not directly applicable to non-linear methods. However, the derivation of this result leads to some interesting mathematical insights which point to how a maximum-ignorance type approach can be followed.
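
As a concrete illustration of the regression baseline the abstract builds on, the NumPy sketch below fits a second-order polynomial colour correction matrix by least squares. The data are synthetic and the expansion terms are one common choice; this is the plain regression setup, not the paper's maximum ignorance derivation.

```python
import numpy as np

def poly_expand(rgb):
    # Second-order polynomial expansion of RGB triples:
    # [R, G, B, R^2, G^2, B^2, RG, RB, GB]
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * r, g * g, b * b, r * g, r * b, g * b], axis=1)

def fit_colour_correction(rgb, xyz):
    # Least-squares regression matrix M with poly_expand(rgb) @ M ~= xyz
    M, *_ = np.linalg.lstsq(poly_expand(rgb), xyz, rcond=None)
    return M

# Synthetic demo: a known linear device transform, recovered from samples.
rng = np.random.default_rng(0)
rgb = rng.random((100, 3))
A = np.array([[0.9, 0.1, 0.0],
              [0.05, 0.8, 0.15],
              [0.0, 0.2, 0.7]])
xyz = rgb @ A  # ground-truth mapping happens to be linear here
M = fit_colour_correction(rgb, xyz)
pred = poly_expand(rgb) @ M  # residual is essentially zero for this demo
```

Because the linear columns are included in the expansion, the fit reproduces the linear ground truth exactly up to numerical error.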

    Integrating colour correction algorithms

    Digital cameras sense colour differently from the human visual system (HVS). Digital cameras sense colour using an imaging sensor, whereas the HVS senses colour using the cone photoreceptors in our retina. Each digital camera model has its own device-specific spectral sensitivity function. It is therefore necessary to convert the device-specific colour responses of an imaging sensor to values that are related to the HVS. This process is typically referred to as colour correction, and it is common to the image processing pipeline across all cameras. In this thesis, we explore the topic of colour correction for digital cameras. Colour correction algorithms establish the mapping between device-specific responses of the camera and HVS-related colour responses. Colour correction algorithms typically need to be trained with datasets. During the training process, we adjust the parameters of the colour correction algorithm in order to minimise the fitting error between the device-specific responses and the corresponding HVS responses. In this thesis, we first show that the choice of the training dataset affects the performance of the colour correction algorithm. We then propose to circumvent this problem by considering a reflectance dataset as a set of samples of a much larger reflectance space. We approximate the convex closure of the reflectance dataset in the reflectance space using a hypercube. Finally, we integrate over this hypercube in order to calculate a matrix for linear colour correction. By computing the linear colour correction matrix this way, we are able to fill in the gaps within a reflectance dataset. We then expand upon the idea of reflectance space further, by allowing all possible reflectances. We explore an alternative formulation of Maximum Ignorance with Positivity (MIP) colour correction. Our alternative formulation allows us to develop a polynomial variant of the concept. Polynomial MIP colour correction is far more complex than MIP colour correction in terms of formulation. Our contribution is theoretically interesting; practically, however, it delivers poorer performance.
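
The integration-over-a-reflectance-space idea can be sketched numerically. Instead of the closed-form integral derived in the thesis, the toy code below draws reflectances uniformly from the unit hypercube, synthesises camera and HVS responses from made-up Gaussian sensitivity curves, and solves for a linear correction matrix. The curves, wavelength grid, and sample count are all illustrative assumptions, not measured data.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.linspace(400, 700, 31)  # wavelength samples in nm

def gauss(mu, sigma):
    # Toy Gaussian sensitivity curve over the wavelength grid
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cam = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)])  # toy camera
cmf = np.stack([gauss(595, 50), gauss(557, 45), gauss(450, 35)])  # toy observer

# Monte Carlo stand-in for integrating over the reflectance hypercube:
# sample reflectances uniformly from [0, 1]^31
refl = rng.random((10000, wl.size))
rgb = refl @ cam.T  # synthetic device-specific responses
xyz = refl @ cmf.T  # synthetic HVS-related responses

# Linear colour correction matrix from the normal equations
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
```

With enough samples this approaches the hypercube integral the thesis evaluates in closed form, without requiring any physical target chart.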

    Colour Correction Toolbox

    For a camera image, the RGB response from the imaging sensor cannot be used to drive display devices directly. The reason is two-fold: different cameras have different spectral sensitivities, and there are different target output spaces (e.g. sRGB, Adobe RGB, and XYZ). The process of mapping captured RGBs to an output colour space is called colour correction. Colour correction is of interest in its own right (e.g. for colour measurement), but it is also an important part of the colour processing pipelines found in digital cameras. In this paper, we look at the problem of mapping device RGB values to corresponding CIE XYZ tristimuli. We make three contributions. First, we review and implement a range of colour correction algorithms. We benchmark these algorithms in experiments using both synthetic data (so we can numerically assess a wider range of cameras) and real image data. In our second contribution, we develop an ensemble method that combines colour correction algorithms to further enhance performance. For the methods tested, we find only a small additional gain from combining the methods. Our final, and perhaps most important, contribution is to provide an open-source colour correction MATLAB toolbox for the community, implementing the algorithms described in the paper. All our experimental data is provided as well.
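
A minimal sketch of an ensemble combiner, assuming a simple weighted average of two corrections' XYZ predictions. The function names, the quadratic term set, and the fixed weighting are hypothetical; the paper's ensemble method may combine or select algorithms differently.

```python
import numpy as np

def linear_predict(M_lin, rgb):
    # Linear colour correction: 3x3 matrix applied to RGB rows
    return rgb @ M_lin

def poly_predict(M_poly, rgb):
    # A small quadratic expansion [R, G, B, RG, RB, GB] times a 6x3 matrix
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ext = np.stack([r, g, b, r * g, r * b, g * b], axis=1)
    return ext @ M_poly

def ensemble_predict(M_lin, M_poly, rgb, w=0.5):
    # Weighted average of the two methods' XYZ predictions
    return w * linear_predict(M_lin, rgb) + (1 - w) * poly_predict(M_poly, rgb)
```

In practice the weight w (or a per-method weighting) would itself be chosen on held-out training data.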

    3D color homography model for photo-realistic color transfer re-coding

    Color transfer is an image editing process that naturally transfers the color theme of a source image to a target image. In this paper, we propose a 3D color homography model which approximates a photo-realistic color transfer algorithm as a combination of a 3D perspective transform and a mean intensity mapping. A key advantage of our approach is that the re-coded color transfer algorithm is simple and accurate. Our evaluation demonstrates that our 3D color homography model delivers leading color transfer re-coding performance. In addition, we show that our 3D color homography model can be applied to color transfer artifact fixing, complex color transfer acceleration, and color-robust image stitching.
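
One way to picture the model, as a rough sketch: treat each RGB triple as homogeneous coordinates, apply a 3x3 homography, renormalise, and then adjust brightness. The brightness-preserving renormalisation and the scalar gain below are simplifying assumptions standing in for the paper's more general mean-intensity mapping.

```python
import numpy as np

def apply_colour_homography(img, H, gain=1.0):
    # img: H x W x 3 array; H: 3x3 colour homography matrix
    flat = img.reshape(-1, 3).astype(float)
    mapped = flat @ H.T
    # Homogeneous-style normalisation: rescale each pixel so its summed
    # intensity matches the input, then apply a global brightness gain
    num = flat.sum(axis=1, keepdims=True)
    den = np.maximum(mapped.sum(axis=1, keepdims=True), 1e-8)
    return (mapped * (num / den) * gain).reshape(img.shape)
```

With H set to the identity and gain 1.0 the image passes through unchanged, which is a useful sanity check when fitting H to a source/target pair.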

    Height from Photometric Ratio with Model-based Light Source Selection

    In this paper, we present a photometric stereo algorithm for estimating surface height. We follow recent work that uses photometric ratios to obtain a linear formulation relating surface gradients and image intensity. Using smoothed finite-difference approximations for the surface gradient, we are able to express surface height recovery as a linear least-squares problem that is large but sparse. To make the method practically useful, we combine it with a model-based approach that excludes observations which deviate from the assumptions of the image formation model. Despite its simplicity, our algorithm provides high-quality surface height estimates even for objects with highly non-Lambertian appearance. We evaluate the method on both synthetic images with ground truth and challenging real images that contain strong specular reflections and cast shadows.
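
The least-squares formulation can be sketched as follows: stack forward-difference constraints relating neighbouring heights to the given gradient fields, then solve for the height field. A dense solver is used here for brevity; as the abstract notes, the real system is large but sparse, so a sparse solver would be used in practice. The photometric-ratio step and model-based observation selection are omitted.

```python
import numpy as np

def height_from_gradients(p, q):
    # Recover height z (up to an additive constant) from gradients
    # p = dz/dx (across columns) and q = dz/dy (across rows).
    h, w = p.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, b = [], []
    # Forward differences in x: z[i, j+1] - z[i, j] = p[i, j]
    for i in range(h):
        for j in range(w - 1):
            r = np.zeros(n); r[idx[i, j]] = -1; r[idx[i, j + 1]] = 1
            rows.append(r); b.append(p[i, j])
    # Forward differences in y: z[i+1, j] - z[i, j] = q[i, j]
    for i in range(h - 1):
        for j in range(w):
            r = np.zeros(n); r[idx[i, j]] = -1; r[idx[i + 1, j]] = 1
            rows.append(r); b.append(q[i, j])
    A = np.array(rows)
    z, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return (z - z.min()).reshape(h, w)

# Demo: a tilted plane is recovered exactly from its constant gradients
h, w = 6, 6
p = np.full((h, w), 0.5)   # dz/dx
q = np.full((h, w), 0.25)  # dz/dy
z = height_from_gradients(p, q)
```

The system is rank-deficient by exactly one (the additive constant), which the final shift resolves by anchoring the minimum height at zero.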

    SoccerNet 2022 Challenges Results

    Peer reviewed
    The SoccerNet 2022 challenges were the second annual video understanding challenges organized by the SoccerNet team. In 2022, the challenges were composed of six vision-based tasks: (1) action spotting, focusing on retrieving action timestamps in long untrimmed videos, (2) replay grounding, focusing on retrieving the live moment of an action shown in a replay, (3) pitch localization, focusing on detecting line and goal part elements, (4) camera calibration, dedicated to retrieving the intrinsic and extrinsic camera parameters, (5) player re-identification, focusing on retrieving the same players across multiple views, and (6) multiple object tracking, focusing on tracking players and the ball through unedited video streams. Compared to last year's challenges, tasks (1-2) had their evaluation metrics redefined to consider tighter temporal accuracies, and tasks (3-6) were novel, including their underlying data and annotations. More information on the tasks, challenges, and leaderboards is available at https://www.soccer-net.org. Baselines and development kits are available at https://github.com/SoccerNet. Applications et Recherche pour une Intelligence Artificielle de Confiance (ARIAC).
